Normal Approximation in Large Network Models

Authors
Abstract


Similar resources

Normal approximation for coverage models over binomial point processes

We give error bounds which demonstrate optimal rates of convergence in the CLT for the total covered volume and the number of isolated shapes, for germ-grain models with fixed grain radius over a binomial point process of n points in a toroidal spatial region of volume n. The proof is based on Stein’s method via size-biased couplings.

Full text
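The setup in this abstract, n uniform germs on a torus of volume n with grains of fixed radius, is easy to simulate, which makes the two statistics concrete. Below is a minimal Monte Carlo sketch, not taken from the paper: the 2-D choice, the radius, and the sample sizes are illustrative assumptions. It estimates the covered volume by probing random points and counts a grain as isolated when its germ is farther than twice the radius from every other germ.

```python
import numpy as np

def simulate_coverage(n=300, r=0.9, n_probe=5000, seed=0):
    """Germ-grain model: n uniform germs on a 2-D torus of area n, each
    carrying a disk of fixed radius r (all values here are illustrative)."""
    rng = np.random.default_rng(seed)
    L = np.sqrt(n)                       # side length, so the torus has area n
    germs = rng.uniform(0, L, size=(n, 2))

    def torus_dists(pts, centers):
        d = np.abs(pts[:, None, :] - centers[None, :, :])
        d = np.minimum(d, L - d)         # wrap-around (toroidal) metric
        return np.sqrt((d ** 2).sum(axis=-1))

    # Monte Carlo estimate of the total covered volume (area, in 2-D).
    probes = rng.uniform(0, L, size=(n_probe, 2))
    covered_volume = (torus_dists(probes, germs) <= r).any(axis=1).mean() * n

    # A grain is isolated when its germ is farther than 2r from every other.
    dd = torus_dists(germs, germs)
    np.fill_diagonal(dd, np.inf)
    n_isolated = int((dd.min(axis=1) > 2 * r).sum())
    return covered_volume, n_isolated

print(simulate_coverage())
```

The CLT results in the paper concern exactly these two quantities as n grows with the region, the so-called thermodynamic limit, so the torus volume scales with the number of points.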

An Approximation for Normal Vectors of Deformable Models

A physically based deformable model proposed by Terzopoulos et al. is governed by Lagrange’s equation of motion, which relates the dynamics of the deformable model to the applied forces. The net instantaneous potential energy of deformation is derived on the basis of the geometric properties, namely the first and second fundamental forms. For simplicity, the normal v...

Full text
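For a parametric surface X(u, v), the exact unit normal is the normalized cross product of the partial derivatives X_u and X_v, the same data that enter the fundamental forms mentioned above. The sketch below approximates those partials with finite differences on a sampled grid; it is a generic approximation, not the specific one proposed in this paper, and the sphere test case is illustrative.

```python
import numpy as np

def unit_normals(X):
    """Approximate unit normals of a parametric surface sampled on an
    (nu, nv, 3) grid: central-difference partials X_u, X_v, then the
    classical n = X_u x X_v / |X_u x X_v|."""
    Xu = np.gradient(X, axis=0)
    Xv = np.gradient(X, axis=1)
    n = np.cross(Xu, Xv)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# Test on a sphere patch, where the true normal is the radial direction.
u, v = np.meshgrid(np.linspace(0.2, np.pi - 0.2, 40),
                   np.linspace(0.0, np.pi, 40), indexing="ij")
X = np.stack([np.sin(u) * np.cos(v),
              np.sin(u) * np.sin(v),
              np.cos(u)], axis=-1)
N = unit_normals(X)
print(np.abs((N * X).sum(axis=-1)).mean())  # close to 1 up to grid error
```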

Fast Approximation Algorithms for Near-optimal Large-scale Network Monitoring

We study the problem of optimal traffic prediction and monitoring in large-scale networks. Our goal is to determine which subset of K links to monitor in order to “best” predict the traffic on the remaining links in the network. We consider several optimality criteria. This can be formulated as a combinatorial optimization problem, belonging to the family of subset selection problems. Similar N...

Full text
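One concrete member of the subset-selection family described above: under a joint Gaussian model of link traffic with covariance Sigma, pick the K monitored links greedily so as to minimize the trace of the conditional covariance of the unmonitored links (an A-optimality criterion). The sketch below is one plausible instance under that assumption, not the paper’s actual criteria or algorithms, and the toy covariance is illustrative.

```python
import numpy as np

def greedy_monitor_selection(Sigma, K):
    """Pick K links to monitor, greedily minimizing the trace of the
    conditional covariance of the unmonitored links given the monitored
    ones (A-optimality under a joint Gaussian traffic model)."""
    n = Sigma.shape[0]
    selected = []
    for _ in range(K):
        best, best_cost = None, np.inf
        for j in range(n):
            if j in selected:
                continue
            S = selected + [j]
            U = [i for i in range(n) if i not in S]
            # Schur complement: Sigma_UU - Sigma_US Sigma_SS^{-1} Sigma_SU
            cond = Sigma[np.ix_(U, U)] - Sigma[np.ix_(U, S)] @ np.linalg.solve(
                Sigma[np.ix_(S, S)], Sigma[np.ix_(S, U)])
            if np.trace(cond) < best_cost:
                best, best_cost = j, np.trace(cond)
        selected.append(best)
    return selected

# Toy run on a 6-link network with an illustrative random covariance.
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
print(greedy_monitor_selection(A @ A.T + 6 * np.eye(6), K=2))
```

The greedy heuristic is the standard fallback for this NP-hard family; the paper’s contribution is faster, near-optimal approximation algorithms for it.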

Large Vocabulary SOUL Neural Network Language Models

This paper presents a continuation of research on Structured OUtput Layer Neural Network language models (SOUL NNLM) for automatic speech recognition. As SOUL NNLMs allow estimating probabilities for all in-vocabulary words, and not only for those pertaining to a limited shortlist, we investigate their performance on a large-vocabulary task. Significant improvements both in perplexity and word error...

Full text
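The point of a structured output layer is to avoid a full softmax over the whole vocabulary: the word probability is factored as P(w|h) = P(c(w)|h) * P(w|c(w), h), so each prediction evaluates only a class distribution and a within-class distribution. The sketch below shows that factorization with a single flat layer of classes; actual SOUL models use a deeper clustering tree, and every parameter name and shape here is an illustrative assumption, not the SOUL code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soul_style_prob(h, word, word2class, Wc, Ww):
    """P(word | history) factored through a word class:
    P(w|h) = P(c(w)|h) * P(w | c(w), h). Only the class softmax (size C)
    and the within-class softmax (size |class|) are evaluated, never a
    softmax over the full vocabulary."""
    c = word2class[word]
    p_class = softmax(h @ Wc)[c]                 # distribution over classes
    members = np.flatnonzero(word2class == c)    # words inside this class
    p_in_class = softmax(h @ Ww[:, members])     # softmax only over the class
    return p_class * p_in_class[members.tolist().index(word)]

# Toy setup: 10 000 words in 100 flat classes, 128-dim history vector.
rng = np.random.default_rng(0)
V, C, d = 10_000, 100, 128
word2class = rng.integers(0, C, size=V)
h = rng.normal(size=d)
Wc, Ww = rng.normal(size=(d, C)), rng.normal(size=(d, V))
print(soul_style_prob(h, 42, word2class, Wc, Ww))
```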

Large Scale Hierarchical Neural Network Language Models

Feed-forward neural network language models (NNLMs) are known to improve both perplexity and word error rate performance for speech recognition compared with conventional n-gram language models. We present experimental results showing how much the WER can be improved by increasing the scale of the NNLM, in terms of model size and training data. However, training time can become very long. We imp...

Full text
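For context on where that training time goes: in a Bengio-style feed-forward NNLM the final layer scores every vocabulary word, so its cost grows linearly with vocabulary size, and hierarchical output layers exist precisely to shrink that term. The sketch below is a generic forward pass under those assumptions, not this paper’s model, with illustrative shapes throughout.

```python
import numpy as np

def nnlm_logits(context_ids, E, W1, b1, W2, b2):
    """Bengio-style feed-forward NNLM: embed the context words, concatenate,
    apply one tanh hidden layer, then score the full vocabulary. The final
    matrix multiply is O(hidden x V), the term a hierarchical output layer
    is designed to shrink."""
    x = E[context_ids].reshape(-1)    # concatenated context embeddings
    hidden = np.tanh(x @ W1 + b1)
    return hidden @ W2 + b2           # logits over all V words

rng = np.random.default_rng(0)
V, d, ctx, H = 5000, 64, 3, 256
E = rng.normal(size=(V, d))
W1, b1 = rng.normal(size=(ctx * d, H)), np.zeros(H)
W2, b2 = rng.normal(size=(H, V)), np.zeros(V)
print(nnlm_logits(np.array([11, 7, 42]), E, W1, b1, W2, b2).shape)  # (5000,)
```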


Journal

Journal title: SSRN Electronic Journal

Year: 2019

ISSN: 1556-5068

DOI: 10.2139/ssrn.3377709